depth information
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Europe > Austria > Vienna (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- (21 more...)
- Education > Curriculum > Subject-Specific Education (0.99)
- Education > Curriculum (0.37)
- Health & Medicine (0.93)
GPS Denied IBVS-Based Navigation and Collision Avoidance of UAV Using a Low-Cost RGB Camera
Wang, Xiaoyu, Tan, Yan Rui, Leong, William, Huang, Sunan, Teo, Rodney, Xiang, Cheng
Abstract -- This paper proposes an image-based visual servoing (IBVS) framework for UAV navigation and collision avoidance using only an RGB camera. While UAV navigation has been extensively studied, it remains challenging to apply IBVS in missions involving multiple visual targets and collision avoidance. The proposed method achieves navigation without explicit path planning, and collision avoidance is realized through AI-based monocular depth estimation from RGB images. Unlike approaches that rely on stereo cameras or external workstations, our framework runs fully onboard a Jetson platform, ensuring a self-contained and deployable system. Experimental results validate that the UAV can navigate across multiple AprilTags and avoid obstacles effectively in GPS-denied environments.

I. INTRODUCTION

Most UAV applications depend on position estimation provided by global positioning systems (GPS). However, GPS is often unavailable in indoor, mountainous, or forest environments, motivating the use of computer vision for UAV navigation. This paper focuses on image-based visual servoing (IBVS) with an onboard RGB camera.
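For orientation, the core of any IBVS scheme is a velocity command driven by image-feature error. The snippet below is a minimal sketch of the classic IBVS control law (feature error e = s - s*, camera velocity v = -lambda * L+ * e with the standard point-feature interaction matrix), not the paper's specific framework; the feature values, depths, and gain are illustrative assumptions, with the depths standing in for a monocular depth estimate as described in the abstract.

```python
# Minimal classic IBVS sketch; all numeric values below are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y)
    at estimated depth Z, mapping 6-DOF camera velocity to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Compute a camera velocity command v = -gain * L^+ * (s - s*)."""
    error = (features - desired).reshape(-1)            # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error             # [vx, vy, vz, wx, wy, wz]

# Example: four tag-corner features (normalized coordinates) with per-point
# depths taken from a monocular depth estimate (made-up values).
s = np.array([[0.10, 0.12], [-0.08, 0.11], [-0.09, -0.10], [0.11, -0.09]])
s_star = np.array([[0.05, 0.05], [-0.05, 0.05], [-0.05, -0.05], [0.05, -0.05]])
Z = [1.8, 1.9, 2.0, 1.9]
print(ibvs_velocity(s, s_star, Z))
```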
- North America > United States > Florida > Orange County > Orlando (0.04)
- Europe > Greece > Crete > Chania (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
- Asia > Japan (0.04)
- Transportation > Air (0.47)
- Aerospace & Defense > Aircraft (0.47)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.46)
SD-VLM: Spatial Measuring and Understanding with Depth-Encoded Vision-Language Models
Chen, Pingyi, Lou, Yujing, Cao, Shen, Guo, Jinhui, Fan, Lubin, Wu, Yue, Yang, Lin, Ma, Lizhuang, Ye, Jieping
While vision-language models (VLMs) excel in 2D semantic visual understanding, their ability to quantitatively reason about 3D spatial relationships remains under-explored, due to the limited spatial representation ability of 2D images. In this paper, we analyze the problems hindering VLMs' spatial understanding and propose SD-VLM, a novel framework that significantly enhances the fundamental spatial perception abilities of VLMs through two key contributions: (1) the Massive Spatial Measuring and Understanding (MSMU) dataset with precise spatial annotations, and (2) a simple depth positional encoding method that strengthens VLMs' spatial awareness. The MSMU dataset covers massive quantitative spatial tasks with 700K QA pairs, 2.5M physical numerical annotations, and 10K chain-of-thought augmented samples. We have trained SD-VLM, a strong generalist VLM that shows superior quantitative spatial measuring and understanding capability. SD-VLM not only achieves state-of-the-art performance on our proposed MSMU-Bench, but also shows spatial generalization abilities on other spatial understanding benchmarks including Q-Spatial and SpatialRGPT-Bench. Extensive experiments demonstrate that SD-VLM outperforms GPT-4o and Intern-VL3-78B by 26.91% and 25.56% respectively on MSMU-Bench. Code and models are released at https://github.com/cpystan/SD-VLM.
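The abstract does not spell out how depth is folded into the positional encoding, so the sketch below is only one plausible reading: a sinusoidal code over per-patch metric depth added to visual token embeddings of matching width. The function names, dimensions, and depth range are assumptions for illustration, not SD-VLM's actual formulation.

```python
# Illustrative depth positional encoding sketch; not the SD-VLM implementation.
import numpy as np

def sinusoidal_depth_encoding(depth, dim=64, max_depth=20.0):
    """Map scalar depths (meters) in [0, max_depth] to a sinusoidal code,
    analogous to a 1D positional encoding taken over the depth axis."""
    depth = np.clip(np.asarray(depth, dtype=np.float64), 0.0, max_depth)
    freqs = np.exp(-np.log(1e4) * np.arange(dim // 2) / (dim // 2))
    angles = depth[..., None] * freqs                   # (..., dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def add_depth_to_tokens(tokens, patch_depths):
    """Add the depth code to visual token embeddings of the same width."""
    return tokens + sinusoidal_depth_encoding(patch_depths, dim=tokens.shape[-1])

# Example: 196 ViT patch tokens of width 64 with per-patch median depths.
tokens = np.random.randn(196, 64)
depths = np.random.uniform(0.5, 10.0, size=196)
print(add_depth_to_tokens(tokens, depths).shape)       # (196, 64)
```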
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Spatial Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)